Iron Behaving Badly: Inappropriate Iron Chelation as a Major Contributor to the Aetiology of Vascular and Other Progressive Inflammatory and Degenerative Diseases
The production of peroxide and superoxide is an inevitable consequence of
aerobic metabolism, and while these particular "reactive oxygen species" (ROSs)
can exhibit a number of biological effects, they are not of themselves
excessively reactive and thus they are not especially damaging at physiological
concentrations. However, their reactions with poorly liganded iron species can
lead to the catalytic production of the very reactive and dangerous hydroxyl
radical, which is exceptionally damaging, and a major cause of chronic
inflammation. We review the considerable and wide-ranging evidence for the
involvement of this combination of (su)peroxide and poorly liganded iron in a
large number of physiological and indeed pathological processes and
inflammatory disorders, especially those involving the progressive degradation
of cellular and organismal performance. These diseases share a great many
similarities and thus might be considered to have a common cause (i.e.
iron-catalysed free radical and especially hydroxyl radical generation). The
studies reviewed include those focused on a series of cardiovascular, metabolic
and neurological diseases, where iron can be found at the sites of plaques and
lesions, as well as studies showing the significance of iron to aging and
longevity. The effective chelation of iron by natural or synthetic ligands is
thus of major physiological (and potentially therapeutic) importance. As
systems properties, we need to recognise that physiological observables have
multiple molecular causes, and studying them in isolation leads to inconsistent
patterns of apparent causality when it is the simultaneous combination of
multiple factors that is responsible. This explains, for instance, the
decidedly mixed effects of antioxidants that have been observed.
Comment: 159 pages, including 9 figures and 2184 references.
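The iron-catalysed hydroxyl radical generation summarised in this abstract is conventionally described by the Fenton reaction, with superoxide regenerating the catalytic Fe(II) (together often called the Haber–Weiss cycle). A minimal summary of this standard chemistry (not quoted verbatim from the paper):

```latex
\mathrm{Fe^{2+} + H_2O_2 \longrightarrow Fe^{3+} + OH^{-} + {}^{\bullet}OH} \quad \text{(Fenton reaction)}
\mathrm{Fe^{3+} + O_2^{\bullet-} \longrightarrow Fe^{2+} + O_2} \quad \text{(superoxide recycles the iron)}
\text{net: } \mathrm{O_2^{\bullet-} + H_2O_2 \longrightarrow O_2 + OH^{-} + {}^{\bullet}OH}
```

Well-liganded iron (e.g. bound in transferrin or ferritin) is largely redox-inactive in this cycle, which is why the review emphasises poorly liganded iron and the importance of effective chelation.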
Guidelines for the use and interpretation of assays for monitoring autophagy (4th edition)
In 2008, we published the first set of guidelines for standardizing research in autophagy. Since then, this topic has received increasing attention, and many scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Thus, it is important to formulate updated guidelines, on a regular basis, for monitoring autophagy in different organisms. Despite numerous reviews, there continues to be confusion regarding acceptable methods to evaluate autophagy, especially in multicellular eukaryotes. Here, we present a set of guidelines for investigators to select and interpret methods to examine autophagy and related processes, and for reviewers to provide realistic and reasonable critiques of reports that are focused on these processes. These guidelines are not meant to be a dogmatic set of rules, because the appropriateness of any assay largely depends on the question being asked and the system being used. Moreover, no individual assay is perfect for every situation, calling for the use of multiple techniques to properly monitor autophagy in each experimental setting. Finally, several core components of the autophagy machinery have been implicated in distinct autophagic processes (canonical and noncanonical autophagy), implying that genetic approaches to block autophagy should rely on targeting two or more autophagy-related genes that ideally participate in distinct steps of the pathway. Along similar lines, because multiple proteins involved in autophagy also regulate other cellular pathways, including apoptosis, not all of them can be used as specific markers for bona fide autophagic responses. Here, we critically discuss current methods of assessing autophagy and the information they can, or cannot, provide. Our ultimate goal is to encourage intellectual and technical innovation in the field.
Languages and tools for hybrid systems design
The explosive growth of embedded electronics is bringing information and control systems of increasing complexity to every aspect of our lives. The most challenging designs are safety-critical systems, such as transportation systems (e.g., airplanes, cars, and trains), industrial plants, and health-care monitoring. The difficulties reside in accommodating constraints on both functionality and implementation. The correct behavior must be guaranteed under diverse states of the environment and potential failures; the implementation has to meet cost, size, and power-consumption requirements. The design is therefore subject to extensive mathematical analysis and simulation. However, traditional models of information systems do not interface well with the continuously evolving nature of the environment in which these devices operate. Thus, in practice, different mathematical representations have to be mixed to analyze the overall behavior of the system. Hybrid systems are a particular class of mixed models that focus on the combination of discrete and continuous subsystems. A wealth of tools and languages has been proposed over the years to handle hybrid systems. However, each tool makes different assumptions about the environment, resulting in somewhat different notions of a hybrid system. This makes it difficult to share information among tools, so the community cannot maximally leverage the substantial amount of work that has been directed at this important topic. In this paper, we review and compare hybrid system tools by highlighting their differences in terms of their underlying semantics, expressive power, and mathematical mechanisms. We conclude our review with a comparative summary, which suggests the need for a unifying approach to hybrid systems design.
As a step in this direction, we make the case for a semantic-aware interchange format, which would enable the use of joint techniques, make a formal comparison between different approaches possible, and facilitate exporting and importing design representations.
Silicon Photonics Codesign for Deep Learning
Deep learning is revolutionizing many aspects of our society, addressing a wide variety of decision-making tasks, from image classification to autonomous vehicle control. Matrix multiplication is an essential and computationally intensive step of deep-learning calculations. The computational complexity of deep neural networks requires dedicated hardware accelerators for additional processing throughput and improved energy efficiency in order to enable scaling to larger networks in upcoming applications. Silicon photonics is a promising platform for hardware acceleration due to recent advances in CMOS-compatible manufacturing capabilities, which enable efficient exploitation of the inherent parallelism of optics. This article provides a detailed description of recent implementations in the relatively new and promising platform of silicon photonics for deep learning. Opportunities for multiwavelength microring silicon photonic architectures codesigned with field-programmable gate arrays (FPGAs) for pre- and postprocessing are presented. The detailed analysis of a silicon photonic integrated circuit shows that a codesigned implementation based on the decomposition of large matrix-vector multiplication into smaller instances, together with the use of nonnegative weights, could significantly simplify the photonic implementation of the matrix multiplier and allow increased scalability. We conclude this article by presenting an overview and a detailed analysis of design parameters. Insights into ways forward are explored.
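The decomposition strategy described in the abstract above (splitting a large matrix-vector product into smaller instances and restricting the hardware to nonnegative weights) can be sketched numerically. The tile size, function names, and the stand-in "photonic" multiply below are illustrative assumptions, not details from the article:

```python
import numpy as np

TILE = 4  # assumed size of the fixed photonic matrix-multiply unit


def photonic_matvec_nonneg(w_tile, x_tile):
    """Stand-in for a nonnegative TILE x TILE photonic multiply.

    Optical intensities are nonnegative, so this unit only accepts
    nonnegative weights and inputs.
    """
    assert (w_tile >= 0).all() and (x_tile >= 0).all()
    return w_tile @ x_tile


def matvec_decomposed(W, x):
    """Compute W @ x using only small nonnegative multiplies.

    Signed values are represented as differences of nonnegative parts
    (W = Wp - Wn, x = xp - xn), and the product is tiled so each call
    fits the TILE x TILE unit.
    """
    n, m = W.shape
    Wp, Wn = np.maximum(W, 0), np.maximum(-W, 0)  # nonnegative split
    xp, xn = np.maximum(x, 0), np.maximum(-x, 0)
    y = np.zeros(n)
    for i in range(0, n, TILE):
        for j in range(0, m, TILE):
            # (Wp - Wn)(xp - xn) expands into four nonnegative products
            for Wt, xt, sign in ((Wp, xp, +1), (Wp, xn, -1),
                                 (Wn, xp, -1), (Wn, xn, +1)):
                y[i:i + TILE] += sign * photonic_matvec_nonneg(
                    Wt[i:i + TILE, j:j + TILE], xt[j:j + TILE])
    return y


rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
x = rng.standard_normal(8)
assert np.allclose(matvec_decomposed(W, x), W @ x)
```

The point of the sketch is that neither the signed-weight split nor the tiling changes the mathematical result; both are purely a reorganization of the computation to match hardware constraints.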
Photonic Switched Optically Connected Memory: An Approach to Address Memory Challenges in Deep Learning
Deep learning has been revolutionizing many aspects of our society, powering various fields including computer vision, natural language processing, and activity recognition. However, the scaling trends for both datasets and model size are constraining system performance, and variability in memory requirements can lead to poor resource utilization. Reconfigurable photonic interconnects provide scalable solutions and enable efficient use of disaggregated memory resources. We propose a photonic switched optically connected memory system architecture that tackles these memory challenges while demonstrating the functionality of optical switching for deep learning models. Our proposed system architecture utilizes a 'lite' (de)serialization scheme for memory transfers via optical links to avoid network overheads and supports the dynamic allocation of remote memories to local processing systems. To test the feasibility of our proposal, we built an experimental testbed with a processing system and two remote memory nodes using silicon photonic switch fabrics and evaluated the system performance. The optical switching time is measured to be 119 µs, and an overall 2.78 ms latency is achieved for the end-to-end reconfiguration. The collective results and existing high-bandwidth optical I/Os show the potential of integrating the photonic switched optically connected memory into state-of-the-art processing systems.